Raw caption
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.67)
- North America > United States > Florida > Palm Beach County > West Palm Beach (0.04)
- North America > United States > Florida > Palm Beach County > Palm Beach (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Automobiles & Trucks > Manufacturer (0.94)
- Transportation > Ground > Road (0.94)
- Leisure & Entertainment > Sports (0.68)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- North America > United States > Florida > Palm Beach County > West Palm Beach (0.04)
- North America > United States > Florida > Palm Beach County > Palm Beach (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Automobiles & Trucks > Manufacturer (0.93)
- Transportation > Ground > Road (0.93)
- Leisure & Entertainment > Sports (0.67)
Multilingual Diversity Improves Vision-Language Representations
Thao Nguyen, Matthew Wallingford, Sebastin Santy, Wei-Chiu Ma, Sewoong Oh, Ludwig Schmidt, Pang Wei Koh, Ranjay Krishna
Massive web-crawled image-text datasets lay the foundation for recent progress in multimodal learning. These datasets are designed with the goal of training a model to do well on standard computer vision benchmarks, many of which, however, have been shown to be English-centric (e.g., ImageNet). Consequently, existing data curation techniques gravitate towards using predominantly English image-text pairs and discard many potentially useful non-English samples. Our work questions this practice. Multilingual data is inherently enriching not only because it provides a gateway to learn about culturally salient concepts, but also because it depicts common concepts differently from monolingual data. We thus conduct a systematic study to explore the performance benefits of using more samples of non-English origin on English vision tasks. By translating all multilingual image-text pairs from a raw web crawl to English and re-filtering them, we increase the prevalence of (translated) multilingual data in the resulting training set. Pre-training on this dataset outperforms using English-only or English-dominated datasets on ImageNet, ImageNet distribution shifts, image-English-text retrieval, and on average across 38 tasks from the DataComp benchmark. On a geographically diverse task like GeoDE, we also observe improvements across all regions, with the biggest gain coming from Africa. In addition, we quantitatively show that English and non-English data are significantly different in both image and (translated) text space. We hope that our findings motivate future work to be more intentional about including multicultural and multilingual data, not just when non-English or geographically diverse tasks are involved, but to enhance model capabilities at large. (A minimal sketch of the translate-and-refilter pipeline appears after the tags below.)
- Africa (0.25)
- Asia > Japan (0.04)
- North America (0.04)
- (4 more...)
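The abstract above describes a two-step recipe: translate every caption in the raw crawl to English, then re-apply a standard quality filter so that translated pairs compete on equal footing with native-English pairs. Below is a minimal sketch of that recipe, not the authors' exact pipeline: it assumes Hugging Face transformers, uses NLLB as the translator, and the per-sample source-language code ("fra_Latn") and the 0.28 CLIP-score threshold are illustrative assumptions rather than the paper's settings.

```python
# Sketch: translate all captions to English, then re-filter by
# CLIP image-text similarity. Assumptions are noted inline.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, pipeline

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# NLLB is one plausible translator choice; the source-language code
# would in practice be detected per sample (assumed here).
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="fra_Latn",
    tgt_lang="eng_Latn",
)

def clip_score(image: Image.Image, text: str) -> float:
    """Cosine similarity between CLIP image and text embeddings."""
    inputs = processor(text=[text], images=image,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

def translate_and_refilter(pairs, threshold=0.28):
    """Translate each caption to English, then keep only pairs whose
    translated caption still matches the image under CLIP.
    `threshold` is an assumed value, not the paper's."""
    kept = []
    for image, caption in pairs:
        english = translator(caption, max_length=77)[0]["translation_text"]
        if clip_score(image, english) >= threshold:
            kept.append((image, english))
    return kept
```

Because filtering happens after translation, non-English pairs are no longer penalized for having captions a monolingual English filter cannot score, which is what raises the prevalence of (translated) multilingual data in the kept set.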
CapsFusion: Rethinking Image-Text Data at Scale
Qiying Yu, Quan Sun, Xiaosong Zhang, Yufeng Cui, Fan Zhang, Yue Cao, Xinlong Wang, Jingjing Liu
Large multimodal models demonstrate remarkable generalist ability to perform diverse multimodal tasks in a zero-shot manner. Large-scale web-based image-text pairs contribute fundamentally to this success, but suffer from excessive noise. Recent studies use alternative captions synthesized by captioning models and have achieved notable benchmark performance. However, our experiments reveal significant Scalability Deficiency and World Knowledge Loss issues in models trained with synthetic captions, which have been largely obscured by their initial benchmark success. Upon closer examination, we identify the root cause as the overly simplified language structure and lack of knowledge details in existing synthetic captions. To provide higher-quality and more scalable multimodal pretraining data, we propose CapsFusion, an advanced framework that leverages large language models to consolidate and refine information from both web-based image-text pairs and synthetic captions. Extensive experiments show that CapsFusion captions exhibit remarkable all-round superiority over existing captions in terms of model performance (e.g., improvements of 18.8 and 18.3 CIDEr points on COCO and NoCaps), sample efficiency (requiring 11-16 times less computation than baselines), world knowledge depth, and scalability. These advantages in effectiveness, efficiency, and scalability position CapsFusion as a promising candidate for future scaling of LMM training. (A rough sketch of the fusion step appears after the tags below.)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > Nevada (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (11 more...)
- Leisure & Entertainment > Sports > Tennis (0.68)
- Leisure & Entertainment > Games > Computer Games (0.46)
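The core operation CapsFusion describes is caption fusion: an LLM merges a noisy raw web caption (which carries proper nouns and world knowledge) with a clean synthetic caption (which carries descriptive structure). The sketch below illustrates that idea with an off-the-shelf instruction-tuned model; the model id and the prompt wording are assumptions for illustration, not the authors' released fusion model or exact prompt.

```python
# Sketch: fuse a raw web caption with a synthetic caption via an LLM.
# Model choice and prompt are assumptions, not CapsFusion's own.
from transformers import pipeline

fuser = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

PROMPT = (
    "Merge the following two captions of the same image into one "
    "fluent caption. Keep proper nouns and world knowledge from the "
    "raw caption and the descriptive structure of the synthetic "
    "caption.\n"
    "Raw caption: {raw}\n"
    "Synthetic caption: {synthetic}\n"
    "Fused caption:"
)

def fuse(raw: str, synthetic: str) -> str:
    """Return a single caption combining both sources."""
    out = fuser(
        PROMPT.format(raw=raw, synthetic=synthetic),
        max_new_tokens=77,
        do_sample=False,        # deterministic fusion output
        return_full_text=False, # drop the prompt from the result
    )
    return out[0]["generated_text"].strip()

# Example usage (hypothetical captions):
# fuse("Pic of the 2019 Mustang GT at Daytona",
#      "A red sports car parked on a racetrack")
```

Greedy decoding is used here so the fused caption is reproducible across runs; the paper itself distills this behavior into a dedicated fusion model for large-scale use.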